Curiosity-Driven Exploration with Planning Trajectories

Author

  • Tyler Streeter
Abstract

Reinforcement learning (RL) agents can reduce learning time dramatically by planning with learned predictive models. Such planning agents learn to improve their actions using planning trajectories, sequences of imagined interactions with the environment. However, planning agents are not intrinsically driven to improve their predictive models, which is a necessity in complex environments. This problem can be solved by adding a curiosity drive that rewards agents for experiencing novel states. Curiosity acts as a higher form of exploration than simple random action selection schemes because it encourages targeted investigation of interesting situations. In a task with multiple external rewards, we show that RL agents using uncertainty-limited planning trajectories and intrinsic curiosity rewards outperform non-curious planning agents. The results show that curiosity helps drive planning agents to improve their predictive models by exploring uncertain territory. To the author’s knowledge, no previous work has tested the benefits of curiosity with planning trajectories.
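As a rough illustration of the mechanism the abstract describes — a planning agent that learns from imagined rollouts through a learned predictive model, adds an intrinsic curiosity bonus for novel experiences, and cuts planning trajectories short where the model is uncertain — the following tabular Python sketch may help. The toy environment, the count-based novelty bonus, and all hyperparameters below are illustrative assumptions, not the paper's actual algorithm or results.

```python
# Hypothetical sketch: curiosity-driven planning with uncertainty-limited
# imagined trajectories, in a tabular setting (assumptions, not the paper's setup).
import numpy as np

n_states, n_actions = 10, 2
rng = np.random.default_rng(0)

Q = np.zeros((n_states, n_actions))       # action-value estimates
counts = np.zeros((n_states, n_actions))  # visit counts (proxy for model confidence)
model = {}                                # learned one-step model: (s, a) -> (s', r)

alpha, gamma = 0.1, 0.95
curiosity_scale = 0.5                     # weight on the intrinsic reward
min_confidence = 3                        # truncate rollouts below this visit count


def curiosity_reward(s, a):
    """Intrinsic reward: large for rarely-visited (novel, uncertain) pairs."""
    return curiosity_scale / np.sqrt(counts[s, a] + 1)


def q_update(s, a, r, s_next):
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])


def plan(start_state, horizon=5, n_trajectories=10):
    """Imagined rollouts ("planning trajectories") through the learned model,
    cut short wherever the model is too uncertain to be trusted."""
    for _ in range(n_trajectories):
        s = start_state
        for _ in range(horizon):
            a = int(Q[s].argmax())
            if (s, a) not in model or counts[s, a] < min_confidence:
                break                     # uncertainty-limited: stop the rollout
            s_next, r = model[(s, a)]
            q_update(s, a, r + curiosity_reward(s, a), s_next)
            s = s_next


def step_env(s, a):
    """Toy stand-in for the real environment (purely illustrative)."""
    s_next = (s + 1) % n_states if a == 1 else max(s - 1, 0)
    r = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, r


s = 0
for t in range(500):
    # Epsilon-greedy action selection in the real environment.
    a = int(Q[s].argmax()) if rng.random() > 0.1 else int(rng.integers(n_actions))
    s_next, r_ext = step_env(s, a)
    # Learn from real experience: external reward plus intrinsic curiosity.
    q_update(s, a, r_ext + curiosity_reward(s, a), s_next)
    model[(s, a)] = (s_next, r_ext)       # update the learned predictive model
    counts[s, a] += 1
    plan(s_next)                          # refine values with imagined interactions
    s = s_next
```

The curiosity bonus steers the agent toward rarely-visited state-action pairs, which in turn improves the learned model in exactly the regions where planning would otherwise be cut short.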


Similar articles

Curiosity-Driven Development of Tool Use Precursors: a Computational Model

Studies of child development of tool use precursors show successive but overlapping phases of qualitatively different types of behaviours. We hypothesize that two mechanisms in particular play a role in the structuring of these phases: the intrinsic motivation to explore and the representation used to encode sensorimotor experience. Previous models showed how curiosity-driven learning mechanism...


Intrinsically motivated exploration as efficient active learning in unknown and unprepared spaces

Intrinsic motivations are mechanisms that guide curiosity-driven exploration (Berlyne, 1965). They have been proposed to be crucial for self-organizing developmental trajectories (Oudeyer et al., 2007) as well as for guiding the learning of general and reusable skills (Barto et al., 2005). Here, we argue that they can be considered as “active learning” algorithms, and show that some of them al...


Learning tactile skills through curious exploration

We present curiosity-driven, autonomous acquisition of tactile exploratory skills on a biomimetic robot finger equipped with an array of microelectromechanical touch sensors. Instead of building tailored algorithms for solving a specific tactile task, we employ a more general curiosity-driven reinforcement learning approach that autonomously learns a set of motor skills in absence of an explici...


Computational Theories of Curiosity-Driven Learning

What are the functions of curiosity? What are the mechanisms of curiosity-driven learning? We approach these questions using concepts and tools from machine learning and developmental robotics. We argue that curiosity-driven learning enables organisms to make discoveries to solve complex problems with rare or deceptive rewards. By fostering exploration and discovery of a diversity of behavioura...


Artificial Curiosity for Autonomous Space Exploration

Curiosity is an essential driving force for science as well as technology, and has led mankind to explore its surroundings, all the way to our current understanding of the universe. Space science and exploration is at the pinnacle of each of these developments, in that it requires the most advanced technology, explores our world and outer space, and constantly pushes the frontier of scientific ...



Publication year: 2006